In this episode, Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind, discusses the landmark model's development, exploring the shift from "infinite data" to a data-limited regime, the importance of research taste, and the evolving landscape of AI pre-training and model capabilities.
In a wide-ranging interview, Łukasz Kaiser, a key architect of modern AI, explains why AI progress continues to advance smoothly, highlighting the shift from pre-training to reasoning models and the potential of multimodal AI, robots, and generalization.
Nathan Lambert and Luca Soldaini from AI2 discuss the release of OLMo 3, a fully open-source AI model that provides unprecedented transparency into model training, highlighting the complex process of developing reasoning AI and the importance of open-source efforts in the global AI landscape.
Nathan Benaich discusses the 2025 State of AI Report, highlighting breakthroughs in AI reasoning, robotics, business adoption, power infrastructure challenges, and geopolitical dynamics shaping the AI landscape.
Julian Schrittwieser from Anthropic discusses the exponential trajectory of AI capabilities, predicting that models will achieve full-day autonomous task completion by 2026 and expert-level performance across many professions by 2027. He also explores how pre-training combined with reinforcement learning could enable AI agents to make novel scientific discoveries and potentially earn Nobel Prizes.